Citation-reason Annotation Using Crowdsourcing
Authors
Abstract
Related resources
Crowdsourcing data citation graphs using provenance
In this paper we describe a tool designed to support crowdsourcing a-posteriori provenance information about the datasets used in research publications. It generates PROV data to capture both the data citation graphs (via an extension to the PROV Data Model) and the crowdsourcing process (via prov:bundles).
Using Crowdsourcing for Multi-label Biomedical Compound Figure Annotation
Information analysis or retrieval for images in the biomedical literature needs to deal with a large amount of compound figures (figures containing several subfigures), as they constitute probably more than half of all images in repositories such as PubMed Central, which was the data set used for the task. The ImageCLEFmed benchmark proposed among other tasks in 2015 and 2016 a multi-label clas...
Iterative Learning for Reliable Crowdsourcing Systems
Crowdsourcing systems, in which tasks are electronically distributed to numerous “information piece-workers”, have emerged as an effective paradigm for human-powered solving of large-scale problems in domains such as image classification, data entry, optical character recognition, recommendation, and proofreading. Because these low-paid workers can be unreliable, nearly all crowdsourcers must de...
A Methodology for Corpus Annotation through Crowdsourcing
In contrast to expert-based annotation, for which elaborate methodologies ensure high quality output, currently no systematic guidelines exist for crowdsourcing annotated corpora, despite the increasing popularity of this approach. To address this gap, we define a crowd-based annotation methodology, compare it against the OntoNotes methodology for expert-based annotation, and identify future ch...
Crowdsourcing Annotation of Non-Local Semantic Roles
This paper reports on a study of crowdsourcing the annotation of non-local (or implicit) frame-semantic roles, i.e., roles that are realized in the previous discourse context. We describe two annotation setups (marking and gap filling) and find that gap filling works considerably better, attaining an acceptable quality relatively cheaply. The produced data is available for research.
Journal
Journal title: DEStech Transactions on Computer Science and Engineering
Year: 2017
ISSN: 2475-8841
DOI: 10.12783/dtcse/aita2017/16000